- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Maryland > Prince George's County > College Park (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States (0.05)
- North America > Canada > Quebec > Montreal (0.05)
- Asia > China > Beijing > Beijing (0.05)
- (7 more...)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Maryland > Prince George's County > College Park (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Asia > China > Beijing > Beijing (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- (10 more...)
VaiPhy: a Variational Inference Based Algorithm for Phylogeny

Appendix A: The VaiPhy Algorithm
The update equations of VaiPhy follow the standard mean-field VI updates. Furthermore, ī denotes the set of all nodes except node i, and C is a constant. We use the NJ (neighbor-joining) algorithm to initialize VaiPhy in a reasonable state. An example script to run PhyML is shown below. Here we also provide two algorithmic descriptions of SLANTIS.
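A minimal PhyML invocation might look like the following sketch. The alignment file name and model choice are placeholder assumptions; the flags shown are standard PhyML 3.x options.

```shell
#!/bin/sh
# Hypothetical PhyML run; "alignment.phy" and the GTR model are placeholders.
# Flags (PhyML 3.x): -i input alignment in PHYLIP format, -d data type
# (nt = nucleotides), -m substitution model, -b bootstrap replicates (0 = none).
PHYML_CMD="phyml -i alignment.phy -d nt -m GTR -b 0"

# Only invoke PhyML if it is actually installed on this machine.
if command -v phyml >/dev/null 2>&1; then
    $PHYML_CMD
else
    echo "phyml not found; would run: $PHYML_CMD"
fi
```

Adjust the model and bootstrap settings to match your analysis before running.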
Export Reviews, Discussions, Author Feedback and Meta-Reviews
The authors prove that variational inference in LDA converges to the ground-truth model, in polynomial time, for two case studies with different underlying assumptions about the structure of the data. In this analysis, the authors employ "thresholded" EM updates, which estimate the per-topic word distribution based on the subset of documents in which a given topic dominates. The proofs, which are provided in a 35-page supplement, require assumptions about the number of words in a document that are uniquely associated with each topic, the number of topics per document, and the number of documents in which a given word exclusively identifies a topic. I am not enough of a specialist to evaluate the provided proofs in detail, so I will restrict myself to relatively high-level comments. Empirically speaking, variational inference can and does get stuck in local maxima.
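The "thresholded" update described above can be sketched as follows. This is an illustrative reading of the idea, not the authors' exact estimator: the function name, the threshold value, and the smoothing constant are assumptions.

```python
import numpy as np

def thresholded_topic_update(doc_word_counts, doc_topic_weights, tau=0.5):
    """Thresholded EM-style M-step sketch: re-estimate each topic's word
    distribution using only the documents where that topic's weight
    exceeds tau, i.e. where the topic dominates.

    doc_word_counts:   (n_docs, n_words) word-count matrix
    doc_topic_weights: (n_docs, n_topics) per-document topic weights
    """
    n_topics = doc_topic_weights.shape[1]
    n_words = doc_word_counts.shape[1]
    # Fall back to a uniform distribution for topics that dominate no document.
    topic_word = np.full((n_topics, n_words), 1.0 / n_words)
    for k in range(n_topics):
        dominated = doc_topic_weights[:, k] > tau  # docs where topic k dominates
        if dominated.any():
            counts = doc_word_counts[dominated].sum(axis=0) + 1e-9  # smoothing
            topic_word[k] = counts / counts.sum()
    return topic_word
```

Restricting each topic's update to its dominating documents is what decouples the topics and makes the polynomial-time analysis tractable, at the price of discarding the soft responsibilities an ordinary EM step would use.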
Reviews: Zap Q-Learning
The paper proposes a variant of Q-learning, called Zap Q-learning, that is more stable than its precursor. Specifically, the authors show that, in the tabular case, their method minimises the asymptotic covariance of the parameter vector by applying approximate second-order updates based on the stochastic Newton-Raphson method. The behaviour of the algorithm is analysed for the particular case of a tabular representation, and experiments are presented showing the empirical performance of the method in its most general form. This is an interesting paper that addresses a core issue in RL. I have some comments regarding both its content and its presentation.
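The matrix-gain, stochastic Newton-Raphson principle underlying Zap can be illustrated on a toy stochastic root-finding problem. This is only a sketch of that principle, not the authors' Q-learning algorithm; the linear-Gaussian model, noise levels, and step-size exponents below are assumptions.

```python
import numpy as np

# Goal: find theta with E[f(theta, W)] = 0, where f(theta, w) = A(w) theta - b(w)
# and only noisy samples A_n, b_n are observed. The Zap-style trick: track a
# running estimate A_hat of the mean Jacobian on a faster ("high-gain") timescale
# and precondition every parameter step with its inverse (a Newton-like step).
rng = np.random.default_rng(0)
A_true = np.array([[3.0, 1.0], [0.0, 2.0]])   # mean Jacobian (invertible)
b_true = np.array([1.0, 4.0])
theta_star = np.linalg.solve(A_true, b_true)  # root of the mean equation

theta = np.zeros(2)
A_hat = np.eye(2)                             # running Jacobian estimate
for n in range(1, 5001):
    A_n = A_true + 0.1 * rng.standard_normal((2, 2))   # noisy Jacobian sample
    b_n = b_true + 0.1 * rng.standard_normal(2)
    f_n = A_n @ theta - b_n                            # noisy function value
    gamma = n ** -0.85                                 # faster matrix step size
    A_hat += gamma * (A_n - A_hat)
    theta -= (1.0 / n) * np.linalg.solve(A_hat, f_n)   # preconditioned step
```

Running the matrix estimate on a faster timescale than the parameter vector is the key design choice: it is what lets the preconditioned iteration achieve the small asymptotic covariance that plain (scalar-gain) stochastic approximation cannot.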